THE AI PODCAST, HOSTED BY NVIDIA

One person, one interview, one story. Join us as we explore the impact of AI on our world, one amazing person at a time -- from the wildlife biologist tracking endangered rhinos across the savannah here on Earth to astrophysicists analyzing 10 billion-year-old starlight in distant galaxies to the Walmart data scientist grappling with the hundreds of millions of parameters lurking in the retailer’s supply chain. Every two weeks, we’ll bring you another tale, another 25-minute interview, as we build a real-time oral history of AI that’s already garnered nearly 3.4 million listens and been acclaimed as one of the best AI and machine learning podcasts. Listen in and get inspired. https://blogs.nvidia.com/ai-podcast/

Popular Clips

That was a few minutes. Sure. And then if you're moving or your target's moving, then you get lost. Right. Yeah. You're done. Yeah. There's nothing there. So what we've done here is we've cut that $100 price down, we've taken a zero off, and, my finance director is not gonna like that, but anyway, we've also made it video rate. So rather than waiting 2 minutes for one image, we can go sixty frames a second. Wow. Okay. You use it like a normal camera. Yeah. It's just better because you see more colors. You see better detail. You can do more. So there are so many advantages. Now that we're in this age of computer vision, one of the biggest problems is gathering your giant data set. Right. Say you wanted to see something really niche or specific in your field. Say, one of our customers wanted to see the difference between different apple varieties. No matter how many images they gathered, they couldn't see the difference between a Braeburn or a Golden Delicious. It turns out you only needed 10 hyperspectral images of each apple variety to tell the difference at basically 90-something percent accuracy. Wow. The other weird thing, and I shouldn't say this out loud since Jensen was just saying how big his servers are, is that we trained that model in about 0.3 seconds. We'll keep that between us, but that's fantastic. So, I think I know the answer to this, but I'm gonna ask anyway: when you talk about taking a hyperspectral image and then seeing it, what is that actually like for the end user? Is it data represented in a chart, on a graph, is it a long text string, or are we somehow magically making me able to see the other ninety-three colors I couldn't previously? Oh, this is a really spicy, good topic, because you can't visualize this; it's literally invisible to the human eye. We realized that partially through the company. It's been around a few years. We still count as a startup, but anyway, you needed to give people the tools. Yep. To work with hyperspectral data. If you just try to dump hyperspectral
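
The apple anecdote hints at why hyperspectral data is so sample-efficient: every image carries a full reflectance spectrum per pixel, so even a handful of examples per class can separate varieties. As a rough illustration only, here is a minimal classifier sketch in Python with placeholder data, an assumed 224 spectral bands, and scikit-learn standing in for whatever pipeline the guest actually used:

```python
# A minimal sketch (not the guest's actual pipeline) of classifying apple
# varieties from hyperspectral data. Each sample is assumed to be a mean
# reflectance spectrum extracted from one image; the band count, class
# names, and data below are illustrative placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 224                         # spectral bands per sample (assumed)
varieties = ["braeburn", "golden_delicious"]

# Placeholder data: 10 spectra per variety, mirroring the anecdote.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(10, n_bands))
               for i, _ in enumerate(varieties)])
y = np.repeat(varieties, 10)

# With so few samples, a simple, well-regularized model is a sane default.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```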

To put a lot of work in and a lot of investment in order to actually make the front end, because I believed in this. I believed that you cannot have a data company, and a data improvement company, if you can't see the data. And yet everyone pushed from all directions, other founders, investors, all over the world, saying, look, you should just do an API. That's what everyone else does. There are a lot of front-end engineers at CleanLab that are gonna smile when they hear that. Good. Excellent. I hear that. Speaking with Stephen Gawthorpe and Curtis Northcutt. Stephen is a senior data scientist at Berkeley Research Group, and Curtis is the CEO and cofounder of CleanLab. We're talking data, we're talking investigations, and we're talking about the value of being able to see your data. I think it's interesting that this came up, because for a lot of folks, it is scary. Right? There's this idea of these vast data sets and not even knowing where to start, let alone how to start making actionable sense out of these things. And so there's a lot of reliance on, you know, whatever it is, the business application that surfaces insights on its own and that sort of stuff. And so it's interesting to hear that about a data company putting that front-end interface first. You've both been at this for long enough, you know, since sort of before Gen AI started to become a big thing. And now, you know, we're sitting at this conference where it's Gen AI this, LLMs that, all over the place. No disrespect to the robots. But, you know, we're talking about this stuff now. I guess, Stephen, I'll start with you: how have generative AI and, you mentioned NLP a little bit before, how have these newer technologies merged with some of the more traditional data science techniques that you came up on? Or
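
Since the clip centers on being able to see, and clean, your data, here is a minimal sketch of the kind of label-auditing workflow the open-source cleanlab library supports. The dataset, model, and injected noise below are illustrative placeholders, not the workflow described on the show:

```python
# A minimal sketch of surfacing likely label errors with cleanlab.
# The synthetic data and logistic regression model are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           random_state=0)
y_noisy = y.copy()
y_noisy[:25] = (y_noisy[:25] + 1) % 3   # inject some wrong labels

# Out-of-sample predicted probabilities from any classifier.
pred_probs = cross_val_predict(LogisticRegression(max_iter=1000),
                               X, y_noisy, cv=5, method="predict_proba")

# Rank examples by how likely their label is wrong, then inspect them.
issues = find_label_issues(labels=y_noisy, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")
print("examples most likely to be mislabeled:", issues[:10])
```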

It's actually an XR headset, by all definitions. Right. Right. Right. It's an XR headset, so it doesn't have a screen; it's an augmented reality headset, but it augments with haptics. Right. It doesn't go on your eyes, so it sits above your eyes, on the forehead. On the forehead. Okay. And that's where the haptics are and everything. But, you know, it has speakers. You can talk to it. It has voice recognition. It has everything a VR headset has except the screens. Right. So maybe a way to get into this, because you started to talk a little bit about it: what's the experience like when someone puts the headset on? Mhmm. Walk us through that. Sure. So the first time you put the headset on, it actually starts in a tutorial mode. It begins talking to you, so you can begin training with it. The first thing it will teach you is roughly where the buttons are and what the parts of the system are. So we have 6 cameras in front, we have the batteries and the computer in the back, and everything. That's the first thing it does. But then it goes into describing how the haptics work, and you actually feel the haptic feedback, and there are some tests and some trainings which you do with the device, and the device runs them on you automatically. So it asks you to train to turn your head. Sorry, for people who might not know the word haptic. Yep. It's similar to if you have a phone and your phone buzzes. It's a little motor that makes vibration. There are a lot of motors which vibrate on your forehead. Right. Representing the... Are they all on the forehead? Yes. They're all on the forehead. Okay. So it actually teaches you how to use them. So for example, if you feel the haptics, the vibration, going to the right, you have to turn your head to the right. Mhmm. And you do all of these training sessions so you can quickly get accustomed to using it. And, you know, a full training is like half an hour, but here at the booth, we do trainings in a minute and a half. Sure. Yeah. And it's still good enough for people to walk. So we've had at least 40, 50 people in the last few days walking blindfolded. So we blindfold people. Yeah. We put the device on and said, okay, handle it, and you can go and enjoy GTC without seeing a thing. I mean, if you can navigate the GTC trade show floor, you know. And with, you know, 2 minutes of training. Yeah. That's damn good. But, you know, when we test with blind people, who are used to walking without any kind of visual stimuli.
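
To make the "vibration going to the right means turn right" idea concrete, here is a purely hypothetical sketch of how forehead motors might encode a direction. The motor count, angular layout, and falloff curve are all assumptions for illustration, not the company's actual firmware:

```python
# Hypothetical haptic guidance sketch: buzz the motors nearest the
# direction the wearer should turn toward, with intensity falling off
# away from that bearing. All parameters are assumed, not measured.
import math

N_MOTORS = 8                      # motors across the forehead (assumed)
MOTOR_ANGLES = [-60 + i * 120 / (N_MOTORS - 1) for i in range(N_MOTORS)]

def motor_intensities(heading_error_deg: float) -> list[float]:
    """Map the angle to the target (negative = turn left) to per-motor
    vibration strength in [0, 1], peaking at the motor nearest that angle."""
    target = max(-60.0, min(60.0, heading_error_deg))  # clamp to the array
    intensities = []
    for angle in MOTOR_ANGLES:
        # Gaussian falloff around the target direction (width is a guess).
        intensities.append(math.exp(-((angle - target) / 20.0) ** 2))
    return intensities

# Target 30 degrees to the right: motors on the right buzz hardest.
print([round(v, 2) for v in motor_intensities(30.0)])
```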

Make chronic diseases something that, you know, we've already solved. Just like we solved infectious diseases, and we solved, you know, lots of other kinds of health care issues, let's also tackle chronic disease. That was the main problem that we wanted to go after. Okay. Now how do you go after chronic disease? Everybody thinks of genomics, or genetics, as a fundamental aspect of your biology. Right? I mean, everybody thinks that, okay, if I understand somebody's DNA, I can actually figure out what's going on. Turns out DNA has very little to say about chronic disease. Mhmm. Okay? Some DNA variants, like, you know, mutations that may cause, you know, sickle cell disease or whatever, right? I mean, those kinds of things may, over time, pick up and catch up and, you know, create certain kinds of diseases. But most chronic diseases are not based on your static DNA. Remember, your DNA, from when you're born until you die, is more or less the same. Yeah. Yeah. Right? But your RNA, which is the gene expression that comes out of the DNA, is changing very regularly. It depends on your lifestyle. It depends on how you slept, you know, for the last few months. It depends on how much activity you have. You know, lots of different things, meaning your environment, impact your RNA. So instead of focusing on the DNA, we decided that we would focus on the RNA. Now you might say, why not focus on proteins, which are actually equally downstream of RNA? Right? The RNA gets, you know, translated into proteins. Why not focus on proteins? Turns out, maybe we could have done that, but it turns out proteins are way more complicated. You know? DNA is complicated. RNA is more complicated. Proteins are just super complicated. I mean, in the future, I think we should be doing all of the above. But if I had to just pick one, I think RNA is the place to go for solving chronic disease. Okay. So then you ask the question, okay, what difference does it make? Right? So let's say that, you know, I knew your gene expression.

And so, yeah, I mean, where you're training the models, but also where you're using the models, has a huge impact, and then the energy sources for that. And I think that's why, you know, as we think about where AI is going, there are some really interesting questions about how it will be deployed, because deploying it on, you know, a battery-powered endpoint device is very different than using it in a highly efficient data center. And so there's lots of, I think, opportunities to figure out how we optimize different deployments of AI in ways that are, you know, serving the broad public interest in the best ways. Right. GPUs obviously have been a huge factor in this AI explosion in recent times, and in the current way that AI is used, to put it that way. How has GPU acceleration transformed the energy efficiency of AI tech? And particularly, if you could, talk about the impact in weather and climate forecasting. Yeah. Well, I mean, obviously, we wouldn't have the whole current generation of AI systems without the advancements we've seen in GPUs. Right? They just wouldn't even be here. And what we've also seen is that when you look at some of these various AI models, you know, different types of classifiers, for example, the efficiency significantly improves over time. And, you know, that's generally because of two factors: one, because of improvements in the hardware, and two, because of improvements in how they're optimizing these AI models based on the improvements in the hardware. And so, you know, that's where, when we look over time, we see significant efficiency growth. But there's also a question of how they're gonna use this technology to, you know, address the overall efficiency of the grid and the overall efficiency of the data centers as well. That's where, you know, for the data
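
The edge-versus-data-center point reduces to simple arithmetic: energy per inference is average power draw divided by throughput. A back-of-the-envelope sketch follows, with every number an illustrative assumption rather than a measurement:

```python
# Back-of-the-envelope sketch of the deployment question raised above:
# energy per inference = average power draw / throughput.
# All figures below are illustrative assumptions, not measurements.
def joules_per_inference(avg_power_watts: float,
                         inferences_per_sec: float) -> float:
    return avg_power_watts / inferences_per_sec

# Hypothetical battery-powered endpoint device: low power, low throughput.
edge = joules_per_inference(avg_power_watts=5.0, inferences_per_sec=2.0)

# Hypothetical GPU server in an efficient data center: far higher power,
# but batching pushes throughput up, which can cut per-query energy.
server = joules_per_inference(avg_power_watts=700.0, inferences_per_sec=2000.0)

print(f"edge:   {edge:.2f} J/inference")
print(f"server: {server:.2f} J/inference")
```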